18 research outputs found

    Shared Autonomy via Hindsight Optimization

    Full text link
    In shared autonomy, user input and robot autonomy are combined to control a robot to achieve a goal. Often, the robot does not know a priori which goal the user wants to achieve, and must both predict the user's intended goal, and assist in achieving that goal. We formulate the problem of shared autonomy as a Partially Observable Markov Decision Process with uncertainty over the user's goal. We utilize maximum entropy inverse optimal control to estimate a distribution over the user's goal based on the history of inputs. Ideally, the robot assists the user by solving for an action which minimizes the expected cost-to-go for the (unknown) goal. As solving the POMDP to select the optimal action is intractable, we use hindsight optimization to approximate the solution. In a user study, we compare our method to a standard predict-then-blend approach. We find that our method enables users to accomplish tasks more quickly while utilizing less input. However, when asked to rate each system, users were mixed in their assessment, citing a tradeoff between maintaining control authority and accomplishing tasks quickly.
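    The core action-selection idea in the abstract (pick the action minimizing expected cost-to-go under the goal distribution, as in QMDP/hindsight optimization) can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the goal names, action set, and cost-to-go table are hypothetical stand-ins for the belief and per-goal Q-values the method would maintain.

    ```python
    def select_action(actions, goal_probs, cost_to_go):
        """QMDP-style selection: minimize expected cost-to-go over goals.

        actions:     candidate robot actions
        goal_probs:  dict goal -> P(goal | history of user inputs)
        cost_to_go:  dict (goal, action) -> estimated cost-to-go for that goal
        """
        def expected_cost(a):
            return sum(p * cost_to_go[(g, a)] for g, p in goal_probs.items())
        return min(actions, key=expected_cost)

    # Toy example: two candidate goals, two candidate actions.
    goal_probs = {"cup": 0.7, "plate": 0.3}
    cost_to_go = {("cup", "left"): 1.0, ("cup", "right"): 3.0,
                  ("plate", "left"): 4.0, ("plate", "right"): 2.0}
    # Expected costs: left = 0.7*1 + 0.3*4 = 1.9; right = 0.7*3 + 0.3*2 = 2.7
    print(select_action(["left", "right"], goal_probs, cost_to_go))  # left
    ```

    Note that unlike predict-then-blend, this acts on the full distribution: even before the goal is disambiguated, the robot can take actions that are good for all likely goals.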

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Full text link
    Robot teleoperation systems face a common set of challenges including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
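    A minimal sketch of the arbitration step described above, assuming a common linear-blending scheme between the user's command and the autonomous controller's command; the paper's actual arbitration policy may be more sophisticated, and the assistance level `alpha` here is an illustrative parameter.

    ```python
    import numpy as np

    def arbitrate(user_cmd, auto_cmd, alpha):
        """Linearly blend user and autonomous velocity commands.

        alpha in [0, 1]: 0 = full user control, 1 = full autonomy.
        Higher assistance compensates for noisier input or harder tasks.
        """
        alpha = float(np.clip(alpha, 0.0, 1.0))
        return (1.0 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(auto_cmd)

    # A noisy low-dimensional BCI command nudged toward the autonomous
    # plan at 60% assistance.
    blended = arbitrate([0.2, -0.1, 0.0], [0.5, 0.0, 0.1], alpha=0.6)
    print(blended)  # [ 0.38 -0.04  0.06]
    ```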

    Acting under Uncertainty for Information Gathering and Shared Autonomy

    No full text
    <p>Acting under uncertainty is a fundamental challenge for any decision maker in the real world. As uncertainty is often the culprit of failure, many prior works attempt to reduce the problem to one with a known state. However, this fails to account for a key property of acting under uncertainty: we can often gain utility while uncertain. This thesis presents methods that utilize this property in two domains: active information gathering and shared autonomy. For active information gathering, we present a general framework for reducing uncertainty just enough to make a decision. To do so, we formulate the Decision Region Determination (DRD) problem, modelling how uncertainty impedes decision making. We present two methods for solving this problem, differing in their computational efficiency and performance bounds. We show that both satisfy adaptive submodularity, a natural diminishing returns property that imbues efficient greedy policies with near-optimality guarantees. Empirically, we show that our methods outperform those which reduce uncertainty without considering how it affects decision making. For shared autonomy, we first show how the general problem of assisting with an unknown user goal can be modelled as one of acting under uncertainty. We then present our framework, based on Hindsight Optimization or QMDP, enabling us to assist for a distribution of user goals by minimizing the expected cost. We evaluate our framework on real users, demonstrating that our method achieves goals faster, requires less user input, decreases user idling time, and results in fewer user-robot collisions than those which rely on predicting a single user goal. Finally, we extend our framework to learn how user behavior changes with assistance, and incorporate this model into cost minimization.</p>
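    The adaptive submodularity property mentioned above is what lets a simple greedy policy come with near-optimality guarantees. The sketch below shows the greedy selection pattern for a monotone submodular objective; the coverage objective and test names are illustrative toys, not the DRD objective from the thesis.

    ```python
    def greedy_policy(tests, utility, budget):
        """Greedy selection: repeatedly pick the test with the largest
        marginal gain in utility. For monotone submodular `utility`,
        this enjoys the classic (1 - 1/e) near-optimality guarantee."""
        selected = []
        for _ in range(budget):
            best = max((t for t in tests if t not in selected),
                       key=lambda t: utility(selected + [t]) - utility(selected))
            selected.append(best)
        return selected

    # Toy submodular objective: number of distinct items covered.
    coverage = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
    def utility(sel):
        covered = set()
        for t in sel:
            covered |= coverage[t]
        return len(covered)

    print(greedy_policy(list(coverage), utility, budget=2))  # ['a', 'b']
    ```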

    Modeling and Perception of Deformable One-Dimensional Objects

    No full text
    Recent advances in the modeling of deformable one-dimensional objects (DOOs) such as surgical suture, rope, and hair show significant promise for improving the simulation, perception, and manipulation of such objects. An important application of these tasks lies in the area of medical robotics, where robotic surgical assistants have the potential to greatly reduce surgeon fatigue and human error by improving the accuracy, speed, and robustness of surgical tasks such as suturing. However, different types of DOOs exhibit a variety of bending and twisting behaviors that are highly dependent on material properties. This paper proposes an approach for fitting simulation models of DOOs to observed data. Our approach learns an energy function such that observed DOO configurations lie in local energy minima. Our experiments on a variety of DOOs show that models fitted to different types of DOOs using our approach enable accurate prediction of future configurations. Additionally, we explore the application of our learned model to the perception of DOOs.
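    The idea that observed configurations lie at local energy minima can be illustrated with a minimal sketch: model the DOO as a chain of 2-D points with a simple stretch energy and predict its rest shape by descending to a local minimum. The energy form, spring constant, and finite-difference gradient here are illustrative assumptions, not the learned energy model from the paper.

    ```python
    import numpy as np

    def stretch_energy(pts, rest_len, k=1.0):
        """Quadratic penalty on deviation of segment lengths from rest_len."""
        seg_lens = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        return 0.5 * k * np.sum((seg_lens - rest_len) ** 2)

    def descend(pts, rest_len, steps=500, lr=0.1, eps=1e-5):
        """Gradient descent (numerical gradient) to a local energy minimum."""
        pts = pts.astype(float).copy()
        for _ in range(steps):
            grad = np.zeros_like(pts)
            for i in range(pts.size):
                d = np.zeros_like(pts)
                d.flat[i] = eps
                grad.flat[i] = (stretch_energy(pts + d, rest_len) -
                                stretch_energy(pts - d, rest_len)) / (2 * eps)
            pts -= lr * grad
        return pts

    # An over-stretched 3-point chain relaxes toward unit segment lengths.
    chain = np.array([[0.0, 0.0], [1.5, 0.0], [3.5, 0.0]])
    relaxed = descend(chain, rest_len=1.0)
    print(np.linalg.norm(np.diff(relaxed, axis=0), axis=1))  # both ~1.0
    ```

    Fitting in the paper goes the other direction: given observed configurations, learn the energy function's parameters so those configurations become local minima.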